
    Naive Realism about Operators

    A source of much difficulty and confusion in the interpretation of quantum mechanics is a ``naive realism about operators.'' By this we refer to various ways of taking too seriously the notion of operator-as-observable, and in particular to the all too casual talk about ``measuring operators'' that occurs when the subject is quantum mechanics. Without a specification of what should be meant by ``measuring'' a quantum observable, such an expression can have no clear meaning. A definite specification is provided by Bohmian mechanics, a theory that emerges from Schr\"odinger's equation for a system of particles when we merely insist that ``particles'' means particles. Bohmian mechanics clarifies the status and the role of operators as observables in quantum mechanics by providing the operational details absent from standard quantum mechanics. It thereby allows us to readily dismiss all the radical claims traditionally enveloping the transition from the classical to the quantum realm---for example, that we must abandon classical logic or classical probability. The moral is rather simple: Beware naive realism, especially about operators! Comment: 18 pages, LaTeX2e with AMS-LaTeX, to appear in Erkenntnis, 1996 (the proceedings of the international conference ``Probability, Dynamics and Causality,'' Luino, Italy, 15-17 June 1995, a special issue edited by D. Costantini and M.C. Gallavotti and dedicated to Prof. R. Jeffrey)

    A Review on Joint Models in Biometrical Research

    In some fields of biometrical research, joint modelling of longitudinal measures and event time data has become very popular. This article reviews recent fruitful research in that area by classifying approaches to joint models into three categories: approaches with focus on serial trends, approaches with focus on event time data, and approaches with equal focus on both outcomes. Typically, longitudinal measures and event time data are modelled jointly by introducing shared random effects or by considering conditional distributions together with marginal distributions. We present the approaches in a uniform nomenclature, comment on the sub-models applied to the longitudinal and event time outcomes individually, and exemplify applications in biometrical research
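
    A minimal example of the shared random-effects idea, written here purely for illustration (the notation is not taken from the review): a linear mixed submodel for the longitudinal measurements linked to a proportional hazards submodel through the subject-specific random effect b_i and an association parameter \alpha:

    \begin{align*}
      y_{ij}       &= \beta_0 + \beta_1 t_{ij} + b_i + \varepsilon_{ij},
                      \qquad b_i \sim N(0,\sigma_b^2),\quad \varepsilon_{ij} \sim N(0,\sigma^2),\\
      \lambda_i(t) &= \lambda_0(t)\,\exp\{\gamma^{\top} x_i + \alpha\, b_i\}.
    \end{align*}

    Here y_{ij} is the j-th longitudinal measurement of subject i at time t_{ij}, \lambda_i(t) is the hazard for the event time, x_i are baseline covariates, and \alpha expresses how strongly the two outcomes are linked; \alpha = 0 decouples the two submodels.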

    Average Run Length and Mean Delay for Changepoint Detection: Robust Estimates for Threshold Alarms

    Online monitoring is a rapidly expanding field in different areas such as quality control, finance and navigation. The automated detection of so-called changepoints plays a prominent role in all these fields, be it the detection of sudden shifts in the mean of a continuously monitored quantity, in the variance of stock quotes, or in characteristic features indicating the malfunctioning of one of the detectors used for navigation (the ``faulty sensor problem''). A prominent example of the application of advanced statistical methods to the detection of changepoints in biomedical time series is the multi-process Kalman filter used by Smith and West [Smith 1983] to monitor renal transplants. However, although the algorithm could be tuned so that the computer predicted dangerous situations on average one day before the human experts, it became superfluous as soon as new diagnostic tools became available. Many of the automated monitoring systems widely used in practice are based on simple threshold alarms: upper and lower limits are chosen at the beginning of the monitoring session, and an alarm is triggered whenever a measured value exceeds the upper limit or falls below the lower limit. This is common practice, for example, in the monitoring of patients during surgery, where such thresholds are chosen for heart rate, blood pressure etc. by the anaesthetist. The fate of the multi-process Kalman filter for monitoring renal transplants teaches two lessons: first, statistical methods have considerable power to improve conventional biomedical monitoring techniques; second, if the statistical model and methods are too refined, they may never be used in practice. We suggest a stochastic model for changepoints which we have found to be very useful in practice, i.e. one that is sufficiently complex to cover the important features of a changepoint system yet simple enough to be understandable and adaptable. We focus on the properties of the threshold alarm for different values of the parameters of the alarm and of the model. This yields practically relevant estimates for this important class of alarm systems and, moreover, a benchmark for the evaluation of competing algorithms. Note that virtually every algorithm designed to detect changepoints is based on a threshold alarm; the only difference is that the alarm is fed not with the original data but with a transformation thereof, usually called the ``residuum'' [Basseville 1993]. As general measures of quality, we look on the one hand at the mean delay time τ between a changepoint and its detection and on the other hand at the mean waiting time for a false alarm, the so-called average run length (ARL)
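
    A minimal simulation sketch of these two quality measures, assuming independent Gaussian observations, a two-sided threshold at ±3σ and a mean shift of two standard deviations (all values chosen purely for illustration, not taken from the paper):

    # Monte Carlo estimate of the average run length (mean waiting time for a
    # false alarm) and the mean detection delay of a simple two-sided threshold
    # alarm; all parameter values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def run_length(mu_before, mu_after, changepoint, lower, upper, sigma=1.0, horizon=100_000):
        """Time index of the first observation outside [lower, upper]."""
        for t in range(horizon):
            mu = mu_before if t < changepoint else mu_after
            x = rng.normal(mu, sigma)
            if x < lower or x > upper:
                return t
        return horizon

    # ARL: no changepoint ever occurs, so every alarm is a false alarm.
    arl = np.mean([run_length(0.0, 0.0, np.inf, -3.0, 3.0) for _ in range(2000)])

    # Mean delay: a shift of two standard deviations at t = 0; measure the time
    # until the alarm fires.
    delay = np.mean([run_length(0.0, 2.0, 0, -3.0, 3.0) for _ in range(2000)])

    print(f"estimated ARL ~ {arl:.0f} observations, mean delay ~ {delay:.1f} observations")

    For independent Gaussian data and a ±3σ threshold the ARL is roughly 370 observations; tightening the thresholds shortens the detection delay at the price of more frequent false alarms.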

    The Message of the Quantum?

    We criticize speculations to the effect that quantum mechanics is fundamentally about information. We do this by pointing out how unfounded such speculations in fact are. Our analysis focuses on the dubious claims of this kind recently made by Anton Zeilinger

    Multiple sclerosis, the measurement of disability and access to clinical trial data

    Background: Inferences about long-term effects of therapies in multiple sclerosis (MS) have been based on surrogate markers studied in short-term trials. Nevertheless, MS trials have been getting steadily shorter despite the lack of a consensus definition for the most important clinical outcome - unremitting progression of disability. Methods: We have examined widely used surrogate markers of disability progression in MS within a unique database of individual patient data from the placebo arms of 31 randomised clinical trials. Findings: Definitions of treatment failure used in secondary progressive MS trials include much change unrelated to the target of unremitting disability. In relapsing-remitting MS, disability progression by treatment-failure definitions was no more likely than similarly defined improvement for these disability surrogates. Existing definitions of disease progression in relapsing-remitting trials encompass random variation, measurement error and remitting relapses, and appear not to measure unremitting disability. Interpretation: Clinical surrogates of unremitting disability used in relapsing-remitting trials cannot be validated. Trials have been too short and/or the degrees of disability change too small to evaluate unremitting disability outcomes. Important implications for trial design and for the reinterpretation of existing trial results have emerged long after regulatory approval and widespread use of therapies in MS, highlighting the necessity of having primary trial data in the public domain

    On-line monitoring using Multi-Process Kalman Filtering

    On-line monitoring of time series is becoming more and more important in different areas of application such as medicine, biometry and finance. In medicine, on-line monitoring of patients after renal transplantation (Smith83) is an easy and prominent example. In finance, fast and reliable recognition of changes in level and trend of intra-daily stock market prices is of obvious interest for ordering and purchasing. In this project, we currently consider monitoring of surgical data such as heart rate, blood pressure and oxygenation. From a statistical point of view, on-line monitoring can be considered as on-line detection of changepoints in time series. That means changepoints have to be detected in real time as new observations come in, usually at short time intervals. Retrospective detection of changepoints, after the whole batch of observations has been recorded, is nice but useless for monitoring patients during an operation. Various statistical approaches are conceivable for on-line detection of changepoints in time series. Dynamic or state space models seem particularly well suited because ``filtering'' has historically been developed exactly for on-line estimation of the ``state'' of some system. Our approach is based on a recent extension of the so-called multi-process Kalman filter for changepoint detection (Schnatter94). It turned out, however, that some important issues for adequate and reliable application have to be considered, in particular the (appropriate) handling of outliers and, as a central point, adaptive on-line estimation of control or hyper-parameters. In this paper, we describe a filter model that has these features and can be implemented in such a way that it is useful for real-time applications with high-frequency time series data. Recently, simulation-based methods for the estimation of non-Gaussian dynamic models have been proposed that may also be adapted and generalized for the purpose of changepoint detection. Most of them solve the smoothing problem, but very recently some proposals have been made that could be useful also for filtering and, thus, for on-line monitoring (Kitagawa96a, Kitagawa96b, Shephard96). Whether these approaches are a useful alternative to our development needs a careful comparison in the future and is beyond the scope of this paper
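
    A greatly simplified sketch of the filtering idea, using a single local-level Kalman filter with a fixed-threshold alarm on the standardized prediction error rather than the multi-process filter with outlier handling and adaptive hyper-parameter estimation described above; all parameter values are illustrative assumptions:

    # A single local-level Kalman filter with a residual-based alarm (a simplified
    # stand-in, not the multi-process filter of the paper); noise variances and
    # the alarm threshold are illustrative assumptions.
    import numpy as np

    def online_monitor(y, obs_var=1.0, state_var=0.01, threshold=4.0):
        """Yield (time, standardized one-step prediction error, alarm flag)."""
        m, P = y[0], 1.0                 # initial level estimate and its variance
        for t, obs in enumerate(y[1:], start=1):
            P_pred = P + state_var       # predict: the level follows a random walk
            S = P_pred + obs_var         # variance of the one-step prediction error
            resid = (obs - m) / np.sqrt(S)
            alarm = abs(resid) > threshold
            K = P_pred / S               # Kalman gain
            m = m + K * (obs - m)        # update the level estimate
            P = (1 - K) * P_pred
            yield t, resid, alarm

    # Example: a level shift of five units at t = 200 in unit-variance noise.
    rng = np.random.default_rng(1)
    y = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])
    alarms = [t for t, resid, alarm in online_monitor(y) if alarm]
    print("first alarm at t =", alarms[0] if alarms else "none")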

    Long-term gait measurements in daily life: Results from the Berlin Aging Study II (BASE-II)

    BACKGROUND: Walking ability is an important prerequisite for activity, social participation and independent living. While in most healthy adults this ability can be taken for granted, limitations in walking ability occur with increasing age. Furthermore, slow walking speed is linked to several chronic conditions and to overall morbidity. Measurements of gait parameters can therefore be used as a proxy to detect functional decline and the onset of chronic conditions. Up to now, the gait characteristics used for this purpose have been measured in standardized laboratory settings. There is some evidence, however, that long-term measurements of gait parameters in the living environment have advantages over short-term laboratory measurements. METHODS: We evaluated cross-sectional data from an accelerometric sensor worn by a subgroup of 554 participants of the Berlin Aging Study II (BASE-II). Data from the two BASE-II age groups (22-36 years and 60-79 years) were used for the current analysis; accelerometric data covering a minimum of two and a maximum of ten days were available. Real-world walking speed, number of steps, maximum coherent distance and total distance were derived as average values per day. Linear regression analyses were performed on the different gait parameters in order to identify significant determinants. Additionally, Mann-Whitney U tests were performed to detect sex-specific differences. RESULTS: Age was significantly associated with real-world walking speed and with the total distance covered per day, while BMI contributed negatively to the number of walking steps, the maximum coherent distance and the total distance walked. Additionally, sex was associated with walking steps. However, R2 values for all models were low. Overall, women took significantly more walking steps and covered a larger coherent distance per day than men. When separated by age group, this difference was significant only in the older participants. Additionally, walking speed was significantly higher in women than in men in the subgroup of older people. CONCLUSIONS: Age- and sex-specific differences have to be considered when objective gait parameters are measured, e.g. in the context of clinical risk assessment. For this purpose, normative data differentiating by age and sex would have to be established to allow reliable classification of long-term gait measurements
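
    A small sketch of this type of analysis with simulated stand-in data (variable names and effect sizes are invented for illustration and do not reproduce BASE-II results):

    # Simulated stand-in data; effect sizes are invented for illustration only.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(2)
    n = 554
    age = rng.uniform(22, 79, n)
    bmi = rng.normal(26, 4, n)
    sex = rng.integers(0, 2, n)                          # 0 = male, 1 = female
    speed = 1.4 - 0.004 * age + rng.normal(0, 0.15, n)   # real-world walking speed (m/s)
    steps = 8000 - 80 * (bmi - 26) + 500 * sex + rng.normal(0, 1500, n)

    # Linear regression of a gait parameter on candidate determinants.
    X = sm.add_constant(np.column_stack([age, bmi, sex]))
    fit = sm.OLS(speed, X).fit()
    print(fit.params, fit.rsquared)

    # Mann-Whitney U test for a sex difference in daily step counts.
    u, p = mannwhitneyu(steps[sex == 1], steps[sex == 0])
    print(f"Mann-Whitney U = {u:.0f}, p = {p:.3f}")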

    Treating Systematic Errors in Multiple Sclerosis Data

    Multiple sclerosis (MS) is characterized by high variability between patients and, more importantly here, within an individual over time. This makes categorization and prognosis difficult. Moreover, it is unclear to what degree this intra-individual variation reflects the long-term course of irreversible disability and what is attributable to short-term processes such as relapses, to interrater variability and to measurement error. Any investigation and prediction of the medium- or long-term evolution of irreversible disability in individual patients is therefore confronted with the problem of systematic error in addition to random fluctuations. The approach described in this article aims to assist in detecting relapses in disease curves and in identifying the underlying disease course. To this end, neurological knowledge was translated into simple rules, which were then implemented in computer algorithms for pre-editing disease curves. Based on simulations, it is shown that pre-editing time series of disability measured with the Expanded Disability Status Scale (EDSS) can lead to more robust and less biased estimates of important disease characteristics, such as baseline EDSS and the time to reach certain EDSS levels or sustained progression
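
    The abstract does not spell out the individual rules, so the following is a purely hypothetical example of a pre-editing rule of this kind: a transient EDSS increase that falls back to its pre-increase level within a few follow-up visits is treated as a relapse and flattened to the baseline value.

    # Hypothetical illustration only; the actual rules are not given in the abstract.
    def preedit_edss(scores, window=3):
        """Return a copy of the EDSS series with transient increases flattened."""
        edited = list(scores)
        i = 1
        while i < len(edited):
            if edited[i] > edited[i - 1]:                  # candidate relapse onset
                baseline = edited[i - 1]
                for j in range(i + 1, min(i + 1 + window, len(edited))):
                    if edited[j] <= baseline:              # back to baseline: treat as relapse
                        for k in range(i, j):
                            edited[k] = baseline           # flatten the transient bump
                        i = j
                        break
                else:
                    i += 1                                 # sustained increase: keep it
            else:
                i += 1
        return edited

    print(preedit_edss([2.0, 2.0, 3.5, 4.0, 2.0, 2.5, 3.0, 3.0]))
    # -> [2.0, 2.0, 2.0, 2.0, 2.0, 2.5, 3.0, 3.0]: the 3.5/4.0 bump is removed,
    #    while the later sustained rise to 3.0 is kept.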

    Reducing the Probability of False Positive Research Findings by Pre-Publication Validation - Experience with a Large Multiple Sclerosis Database

    *Objective*
We have assessed the utility of a pre-publication validation policy in reducing the probability of publishing false positive research findings. 
*Study design and setting*
The large database of the Sylvia Lawry Centre for Multiple Sclerosis Research was split into two parts: one for hypothesis generation and a validation part for confirmation of selected results. We present case studies from five finalized projects that have used the validation policy, together with results from a simulation study.
*Results*
In one project, the "relapse and disability" project as described in section II (example 3), findings could not be confirmed in the validation part of the database. The simulation study showed that the percentage of false positive findings can exceed 20% depending on variable selection. 
*Conclusion*
We conclude that, over the past three years, the validation policy has prevented the publication of at least one research finding that could not be validated in an independent data set (and would probably have been a "true" false positive), and that it has led to improved data analysis, statistical programming, and selection of hypotheses. The advantages outweigh the loss of statistical power inherent in the process
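
    A minimal sketch of the split-and-confirm idea behind such a validation policy (simulated noise data and an illustrative 50/50 split, not the Centre's actual database or pipeline); it also shows how selecting the ``best'' of many candidate predictors inflates false positive findings:

    # Simulated noise data and an illustrative 50/50 split; not the Centre's
    # actual database or analysis pipeline.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(3)
    n_patients, n_candidates = 400, 50
    X = rng.normal(size=(n_patients, n_candidates))   # candidate predictors (pure noise)
    y = rng.normal(size=n_patients)                   # outcome, unrelated to X by design

    explore, validate = np.arange(0, 200), np.arange(200, 400)   # database split

    # Hypothesis generation: pick the predictor with the smallest p-value on the
    # exploratory part (this is where false positives are generated).
    p_explore = [pearsonr(X[explore, j], y[explore])[1] for j in range(n_candidates)]
    best = int(np.argmin(p_explore))

    # Validation: test only this pre-specified hypothesis on the untouched part.
    p_validate = pearsonr(X[validate, best], y[validate])[1]
    print(f"exploratory p = {min(p_explore):.4f}, validation p = {p_validate:.3f}")

    With 50 noise predictors the smallest exploratory p-value falls below 0.05 in most runs, whereas the validation p-value behaves like a uniform draw; confirming a pre-specified hypothesis on the reserved part is exactly the protection the policy provides.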

    The Normal Fetal Heart Rate Study: Analysis Plan

    Recording of the fetal heart rate via CTG monitoring has been routinely performed as an important part of antenatal and intrapartum care for several decades. The current guidelines of the FIGO (ref 1) recommend a normal range of the fetal heart rate from 110 to 150 bpm. However, there is no agreement in the medical community on whether this is the correct range (ref 2). We aim to address this question by computerized analysis (ref 3) of a high-quality database (HQDb, ref 4) of about one billion electronically registered fetal heart rate measurements from about 10,000 pregnancies in three medical centres over seven years. In the present paper, we lay out a detailed analysis plan for this evidence-based project in the spirit of the validation policy of the Sylvia Lawry Centre for Multiple Sclerosis Research (ref 5), with a split of the database into an exploratory part and a part reserved for validation. We will perform the analysis and the validation after publication of this plan in order to reduce the probability of publishing false positive research findings (refs 6-7)
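
    A small illustrative sketch of the exploratory/validation split for estimating a reference range (simulated heart rates, not data from the HQDb):

    # Simulated per-pregnancy baseline heart rates (bpm); the real study uses
    # about 10,000 pregnancies and roughly one billion individual measurements.
    import numpy as np

    rng = np.random.default_rng(4)
    baseline_fhr = rng.normal(loc=140, scale=10, size=10_000)

    order = rng.permutation(baseline_fhr.size)
    explore = baseline_fhr[order[:5_000]]            # exploratory part
    validate = baseline_fhr[order[5_000:]]           # part reserved for validation

    low, high = np.percentile(explore, [2.5, 97.5])  # empirical 95% reference range
    coverage = np.mean((validate >= low) & (validate <= high))
    print(f"exploratory range: {low:.0f}-{high:.0f} bpm, coverage on validation: {coverage:.1%}")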